38 research outputs found

    Investigating spoken language comprehension as perceptual inference

    Get PDF

    Neural tracking of phrases in spoken language comprehension is automatic and task-dependent

    Get PDF
    Linguistic phrases are tracked in sentences even though there is no one-to-one acoustic phrase marker in the physical signal. This phenomenon suggests an automatic tracking of abstract linguistic structure that is endogenously generated by the brain. However, all studies investigating linguistic tracking compare conditions in which relevant information at linguistic timescales is either available or absent altogether (e.g., sentences versus word lists during passive listening). It is therefore unclear whether tracking at phrasal timescales is related to the content of language, or rather arises as a consequence of attending to timescales that happen to match behaviourally relevant information. To investigate this question, we presented participants with sentences and word lists while recording their brain activity with magnetoencephalography (MEG). Participants performed passive, syllable, word, and word-combination tasks, corresponding to attending to four different rates: the one they would naturally attend to, syllable rates, word rates, and phrasal rates, respectively. We replicated overall findings of stronger phrasal-rate tracking, measured with mutual information, for sentences compared to word lists across the classical language network. However, in the inferior frontal gyrus (IFG) we found a task effect suggesting stronger phrasal-rate tracking during the word-combination task independent of the presence of linguistic structure, as well as stronger delta-band connectivity during this task. These results suggest that extracting linguistic information at phrasal rates occurs automatically with or without the presence of an additional task, but also that the IFG might be important for temporal integration across various perceptual domains.

    From language to language-ish: How brain-like is an LSTM representation of nonsensical language stimuli?

    Get PDF
    The representations generated by many models of language (word embeddings, recurrent neural networks, and transformers) correlate with brain activity recorded while people read. However, these decoding results are usually based on the brain's reaction to syntactically and semantically sound language stimuli. In this study, we asked: how does an LSTM (long short-term memory) language model, trained (by and large) on semantically and syntactically intact language, represent a language sample with degraded semantic or syntactic information? Does the LSTM representation still resemble the brain's reaction? We found that, even for some kinds of nonsensical language, there is a statistically significant relationship between the brain's activity and the representations of an LSTM. This indicates that, at least in some instances, LSTMs and the human brain handle nonsensical data similarly.
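The model-to-brain decoding the abstract alludes to is typically done by regressing recorded brain responses on model representations and correlating predictions with held-out activity. A minimal sketch, using entirely synthetic data in place of real LSTM states and neural recordings (feature counts, the ridge penalty, and the split are illustrative assumptions, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_feat, n_vox = 200, 50, 10
lstm_feats = rng.standard_normal((n_words, n_feat))      # stand-in for LSTM hidden states
true_w = rng.standard_normal((n_feat, n_vox))
brain = lstm_feats @ true_w + 0.5 * rng.standard_normal((n_words, n_vox))  # toy "recordings"

# Ridge regression from model features to brain activity, fit on a training split
train, test = slice(0, 150), slice(150, 200)
lam = 1.0  # illustrative ridge penalty
X, Y = lstm_feats[train], brain[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)
pred = lstm_feats[test] @ W

def col_corr(a, b):
    # Per-column (per-voxel) Pearson correlation
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

r = col_corr(pred, brain[test])  # held-out prediction accuracy per voxel
```

In practice, significance of such correlations is then assessed against a permutation baseline, which is how a "statistically significant relationship" would be established.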

    Linguistic structure and meaning organize neural oscillations into a content-specific hierarchy

    No full text
    Neural oscillations track linguistic information during speech comprehension (e.g., Ding et al., 2016; Keitel et al., 2018), and are known to be modulated by acoustic landmarks and speech intelligibility (e.g., Doelling et al., 2014; Zoefel & VanRullen, 2015). However, studies investigating linguistic tracking have either relied on non-naturalistic isochronous stimuli or failed to fully control for prosody. Therefore, it is still unclear whether low-frequency activity tracks linguistic structure during natural speech, where linguistic structure does not follow such a palpable temporal pattern. Here, we measured electroencephalography (EEG) and manipulated the presence of semantic and syntactic information apart from the timescale of their occurrence, while carefully controlling for the acoustic-prosodic and lexical-semantic information in the signal. EEG was recorded while 29 adult native speakers (22 women, 7 men) listened to naturally spoken Dutch sentences, jabberwocky controls with morphemes and sentential prosody, word lists with lexical content but no phrase structure, and backwards acoustically-matched controls. Mutual information (MI) analysis revealed sensitivity to linguistic content: MI was highest for sentences at the phrasal (0.8-1.1 Hz) and lexical timescales (1.9-2.8 Hz), suggesting that the delta-band is modulated by lexically-driven combinatorial processing beyond prosody, and that linguistic content (i.e., structure and meaning) organizes neural oscillations beyond the timescale and rhythmicity of the stimulus. This pattern is consistent with neurophysiologically inspired models of language comprehension (Martin, 2016, 2020; Martin & Doumas, 2017) where oscillations encode endogenously generated linguistic content over and above exogenous or stimulus-driven timing and rhythm information.
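The core of an MI analysis at a linguistic timescale is: band-pass both the stimulus envelope and the neural signal at that timescale, extract instantaneous phase, and estimate the mutual information between the two phase series. A toy sketch with synthetic signals and a simple histogram estimator (the sampling rate, coupling strength, and bin count are illustrative assumptions; the study used real EEG and more careful estimation):

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def band_phase(x, lo, hi, fs):
    # Band-pass filter (SOS form for stability at narrow low-frequency bands),
    # then take the instantaneous phase via the Hilbert transform
    sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def mutual_info_bits(x, y, bins=8):
    # Plug-in (histogram) estimate of mutual information, in bits
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

fs, secs = 200, 60
rng = np.random.default_rng(0)
envelope = rng.standard_normal(fs * secs)                   # stand-in speech envelope
eeg = 0.6 * envelope + rng.standard_normal(fs * secs)       # toy "tracking" channel

# MI between stimulus and response phase at the phrasal timescale (0.8-1.1 Hz)
mi = mutual_info_bits(band_phase(envelope, 0.8, 1.1, fs),
                      band_phase(eeg, 0.8, 1.1, fs))
```

Because the histogram estimator is positively biased, a real analysis compares the observed MI against a null distribution from shuffled or time-reversed pairings rather than against zero.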

    Discourse markers activate their, like, cohort competitors

    Get PDF
    Speech in everyday conversations is riddled with discourse markers (DMs), such as well, you know, and like. However, in many lab-based studies of speech comprehension, such DMs are typically absent from the carefully articulated and highly controlled speech stimuli. As such, little is known about how these DMs influence online word recognition. The present study specifically investigated the online processing of DM like and how it influences the activation of words in the mental lexicon. We specifically targeted the cohort competitor (CC) effect in the Visual World Paradigm: Upon hearing spoken instructions to “pick up the beaker,” human listeners also typically fixate—next to the target object—referents that overlap phonologically with the target word (cohort competitors such as beetle; CCs). However, several studies have argued that CC effects are constrained by syntactic, semantic, pragmatic, and discourse constraints. Therefore, the present study investigated whether DM like influences online word recognition by activating its cohort competitors (e.g., lightbulb). In an eye-tracking experiment using the Visual World Paradigm, we demonstrate that when participants heard spoken instructions such as “Now press the button for the, like … unicycle,” they showed anticipatory looks to the CC referent (lightbulb) well before hearing the target. This CC effect was sustained for a relatively long period of time, even after hearing disambiguating information (i.e., the /k/ in like). Analysis of the reaction times also showed that participants were significantly faster to select CC targets (lightbulb) when preceded by DM like. These findings suggest that seemingly trivial DMs, such as like, activate their CCs, impacting online word recognition. Thus, we advocate a more holistic perspective on spoken language comprehension in naturalistic communication, including the processing of DMs.

    Preoperative serum uric acid predicts incident acute kidney injury following cardiac surgery

    No full text
    Abstract
    Background: Acute kidney injury (AKI) following cardiac surgery is a frequent complication, and several risk factors increasing its incidence have already been characterized. This study evaluates the influence of preoperatively increased serum uric acid (SUA) levels, in comparison with other known risk factors, on the incidence of AKI following cardiac surgery.
    Methods: Over a period of 5 months, 247 patients underwent elective coronary artery bypass grafting, valve replacement/repair, or combined bypass and valve surgery. Data were prospectively analyzed. The primary endpoint was the incidence of AKI as defined by the AKI criteria, comparing patients with preoperative SUA levels below versus above the median. Multivariate logistic regression analysis was used to identify independent predictors of postoperative AKI.
    Results: Thirty (12.1%) of the 247 patients developed postoperative AKI; 24 of 30 (80%) had preoperative SUA levels above the median (≥373 μmol/l) (OR: 4.680, 95% CI 1.840–11.904, p = 0.001). In the multivariate analysis, SUA levels above the median (OR: 5.497, 95% CI 1.772–17.054, p = 0.003), cardiopulmonary bypass (CPB) time > 90 min (OR: 4.595, 95% CI 1.587–13.305, p = 0.005), body mass index (BMI) > 30 kg/m2 (OR: 3.208, 95% CI 1.202–8.562, p = 0.02), and preoperatively elevated serum creatinine levels (OR: 1.015, 95% CI 1.001–1.029, p = 0.04) were independently associated with postoperative AKI.
    Conclusions: Serum uric acid is an independent risk marker for AKI after cardiac surgery. Of all evaluated factors, it showed the highest odds ratio.
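The univariate odds ratio for a median-split predictor like SUA comes from a 2x2 table of exposure versus outcome. A minimal sketch of that calculation with a Wald confidence interval; the cell counts below are illustrative assumptions chosen to be roughly consistent with the reported 24-of-30 split in n = 247, since the abstract does not give the exact group sizes:

```python
import math

# Hypothetical 2x2 table (illustrative counts, not the study's actual data):
#                         AKI   no AKI
above_aki, above_ok = 24, 99    # SUA >= median (assumed n = 123)
below_aki, below_ok = 6, 118    # SUA <  median (assumed n = 124)

# Odds ratio: odds of AKI in the high-SUA group over odds in the low-SUA group
odds_ratio = (above_aki / above_ok) / (below_aki / below_ok)

# Wald 95% CI on the log odds ratio
se = math.sqrt(1 / above_aki + 1 / above_ok + 1 / below_aki + 1 / below_ok)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)
```

The multivariate ORs in the abstract come instead from logistic regression, where each coefficient exponentiates to an odds ratio adjusted for the other predictors.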